Marine biologists discover 28 new deep sea species--and an old VHS tape

Popular Science

ROV pilots filmed this glass squid while exploring the Colorado-Rawson submarine canyon off the coast of Argentina. The marine biologists of the Schmidt Ocean Institute are a busy bunch. Over the last few years, scientists aboard the research vessel have spotted rare Antarctic squid, discovered multiple octopus species near Costa Rica, and even cataloged over 100 potential new species off the coast of Chile. To kick off 2026, the Institute released a trove of new images and videos highlighting some of their latest observations from the South Atlantic Ocean.


How Christian Leaders Are Challenging the AI Boom

TIME - Tech

Pope Leo XIV made his first address to the College of Cardinals on May 10, 2025 in Vatican City, and touched upon the rise of artificial intelligence. As technologists race to accelerate AI's progress with minimal guardrails, they are being met with increasing resistance from a powerful global contingent: Christian leaders and their congregations. Christians are not a monolith by any means. But this year, Christian leaders across sects--including Catholics, Evangelicals, and Baptists--sounded the alarm on AI's potential impact on family, human relationships, labor, and the church itself.


How Should the Law Treat Future AI Systems? Fictional Legal Personhood versus Legal Identity

Alexander, Heather J., Simon, Jonathan A., Pinard, Frédéric

arXiv.org Artificial Intelligence

The law draws a sharp distinction between objects and persons, and between two kinds of persons, the "fictional" kind (i.e. corporations), and the "non-fictional" kind (individual or "natural" persons). This paper will assess whether we maximize overall long-term legal coherence by (A) maintaining an object classification for all future AI systems, (B) creating fictional legal persons associated with suitably advanced, individuated AI systems (giving these fictional legal persons derogable rights and duties associated with certified groups of existing persons, potentially including free speech, contract rights, and standing to sue "on behalf of" the AI system), or (C) recognizing non-fictional legal personhood through legal identity for suitably advanced, individuated AI systems (recognizing them as entities meriting legal standing with non-derogable rights, which for the human case include life, due process, habeas corpus, freedom from slavery, and freedom of conscience). We will clarify the meaning and implications of each option along the way, considering liability, copyright, family law, fundamental rights, civil rights, citizenship, and AI safety regulation. We will tentatively find that the non-fictional personhood approach may be best from a coherence perspective, for at least some advanced AI systems. An object approach may prove untenable for sufficiently humanoid advanced systems, though we suggest that it is adequate for currently existing systems as of 2025. While fictional personhood would resolve some coherence issues for future systems, it would create others and provide solutions that are neither durable nor fit for purpose. Finally, our review will suggest that "hybrid" approaches are likely to fail and lead to further incoherence: the choice between object, fictional person and non-fictional person is unavoidable.


Sensitivity of Small Language Models to Fine-tuning Data Contamination

Scaria, Nicy, Kennedy, Silvester John Joseph, Subramani, Deepak

arXiv.org Artificial Intelligence

Small Language Models (SLMs) are increasingly being deployed in resource-constrained environments, yet their behavioral robustness to data contamination during instruction tuning remains poorly understood. We systematically investigate the contamination sensitivity of 23 SLMs (270M to 4B parameters) across multiple model families by measuring susceptibility to two transformation types during instruction tuning: syntactic transformations (character and word reversal) and semantic transformations (irrelevant and counterfactual responses), each applied at contamination levels of 25%, 50%, 75%, and 100%. Our results reveal fundamental asymmetries in vulnerability patterns: syntactic transformations cause catastrophic performance degradation, with character reversal producing near-complete failure across all models regardless of size or family, while semantic transformations demonstrate distinct threshold behaviors and greater resilience in core linguistic capabilities. Critically, we discover a "capability curse" where larger, more capable models become more susceptible to learning semantic corruptions, effectively following harmful instructions more readily, while our analysis of base versus instruction-tuned variants reveals that alignment provides inconsistent robustness benefits, sometimes even reducing resilience. Our work establishes three core contributions: (1) empirical evidence of SLMs' disproportionate vulnerability to syntactic pattern contamination, (2) identification of asymmetric sensitivity patterns between syntactic and semantic transformations, and (3) systematic evaluation protocols for contamination robustness assessment. These findings have immediate deployment implications, suggesting that current robustness assumptions may not hold for smaller models and highlighting the need for contamination-aware training protocols.
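The contamination setup the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' pipeline: the field names (`instruction`, `response`), the per-example sampling scheme, and the seeding are all assumptions.

```python
import random

def contaminate(examples, level=0.25, seed=0):
    """Apply character-reversal contamination (one of the paper's
    syntactic transformations) to roughly `level` of the examples.

    `examples` is a list of {"instruction": ..., "response": ...} dicts;
    a fixed seed keeps the corruption reproducible across runs.
    """
    rng = random.Random(seed)
    corrupted = []
    for ex in examples:
        resp = ex["response"]
        if rng.random() < level:
            resp = resp[::-1]  # character reversal of the target response
        corrupted.append({"instruction": ex["instruction"], "response": resp})
    return corrupted
```

Word reversal, irrelevant responses, or counterfactual responses would slot in by swapping the one-line transformation, which is what makes the four contamination levels directly comparable across transformation types.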


AI Diffusion in Low Resource Language Countries

Misra, Amit, Zamir, Syed Waqas, Hamidouche, Wassim, Becker-Reshef, Inbal, Ferres, Juan Lavista

arXiv.org Artificial Intelligence

Artificial intelligence (AI) is diffusing globally at unprecedented speed, but adoption remains uneven. Frontier Large Language Models (LLMs) are known to perform poorly on low-resource languages due to data scarcity. We hypothesize that this performance deficit reduces the utility of AI, thereby slowing adoption in Low-Resource Language Countries (LRLCs). To test this, we use a weighted regression model to isolate the language effect from socioeconomic and demographic factors, finding that LRLCs have a share of AI users that is approximately 20% lower relative to their baseline. These results indicate that linguistic accessibility is a significant, independent barrier to equitable AI diffusion.
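A weighted regression of the kind the abstract describes can be sketched as follows. Everything here is synthetic and hypothetical: the controls (`gdp`, `internet`), the weights, and the coefficient values are stand-ins, not the paper's data or model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
gdp = rng.normal(0, 1, n)        # socioeconomic control (standardized)
internet = rng.normal(0, 1, n)   # demographic control (standardized)
lrlc = rng.integers(0, 2, n)     # 1 if a low-resource-language country
# Synthetic outcome with a known language effect of -0.2
share = 1.0 + 0.5 * gdp + 0.3 * internet - 0.2 * lrlc + rng.normal(0, 0.05, n)
w = rng.uniform(1, 10, n)        # regression weights (e.g. population)

# Weighted least squares via rescaling: multiply rows by sqrt(weight)
X = np.column_stack([np.ones(n), gdp, internet, lrlc])
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(X * sw[:, None], share * sw, rcond=None)
language_effect = beta[3]  # recovers the simulated -0.2 effect
```

The point of the design is visible in `beta[3]`: once the controls absorb socioeconomic and demographic variation, the coefficient on the LRLC indicator isolates the language-associated gap in AI-user share.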


Catholic clergy sex abuse survivors hopeful after Pope Leo meeting

BBC News

Survivors of sex abuse by members of the Catholic clergy have expressed hope after meeting Pope Leo at the Vatican for the first time. Gemma Hickey, board president of Ending Clergy Abuse (ECA Global), told the BBC it spoke volumes that he had met them so soon in his papacy. The group is pushing for a global zero-tolerance policy, already adopted in the US, of permanently removing a priest who admits or is proven to have sexually abused a child. The Pope acknowledged there was resistance in some parts of the world to this, Hickey said. The new Pope, who assumed the role in May, has inherited an issue that has haunted the Catholic Church for decades and that the Vatican has struggled to root out.




On the Role of Unobserved Sequences on Sample-based Uncertainty Quantification for LLMs

Kunitomo-Jacquin, Lucie, Marrese-Taylor, Edison, Fukuda, Ken

arXiv.org Artificial Intelligence

Quantifying uncertainty in large language models (LLMs) is important for safety-critical applications because it helps spot incorrect answers, known as hallucinations. One major trend of uncertainty quantification methods is based on estimating the entropy of the distribution of the LLM's potential output sequences. This estimation is based on a set of output sequences and associated probabilities obtained by querying the LLM several times. In this paper, we advocate and experimentally show that the probability of unobserved sequences plays a crucial role, and we recommend future research to integrate it to enhance such LLM uncertainty quantification methods.
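A toy example makes the role of unobserved sequences concrete. The numbers and the renormalization scheme below are illustrative assumptions, not the paper's method: three sampled answers cover only 60% of the model's probability mass, and a naive entropy estimate over the renormalized observed set silently ignores the remaining 40%.

```python
import math

def naive_entropy(seq_probs):
    """Entropy estimate from observed sequences only, renormalizing
    their probabilities to sum to 1 (a common sample-based scheme)."""
    total = sum(seq_probs.values())
    return -sum((p / total) * math.log(p / total) for p in seq_probs.values())

# Hypothetical sampled outputs with their sequence probabilities
observed = {"Paris": 0.4, "Lyon": 0.15, "Marseille": 0.05}
entropy_estimate = naive_entropy(observed)       # ~0.824 nats
unobserved_mass = 1.0 - sum(observed.values())   # 0.4 of the mass is unseen
```

If that unobserved 0.4 were spread over many distinct sequences, the true entropy would be far higher than the estimate; if it sat on one more near-duplicate answer, it would be lower. The estimate alone cannot tell these cases apart, which is the gap the paper argues future methods should close.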


Evaluating Large Language Models for IUCN Red List Species Information

Uryu, Shinya

arXiv.org Artificial Intelligence

Large Language Models (LLMs) are rapidly being adopted in conservation to address the biodiversity crisis, yet their reliability for species evaluation is uncertain. This study systematically validates five leading models on 21,955 species across four core IUCN Red List assessment components: taxonomy, conservation status, distribution, and threats. A critical paradox was revealed: models excelled at taxonomic classification (94.9%) but consistently failed at conservation reasoning (27.2% for status assessment). This knowledge-reasoning gap, evident across all models, suggests inherent architectural constraints, not just data limitations. Furthermore, models exhibited systematic biases favoring charismatic vertebrates, potentially amplifying existing conservation inequities. These findings delineate clear boundaries for responsible LLM deployment: they are powerful tools for information retrieval but require human oversight for judgment-based decisions. A hybrid approach is recommended, where LLMs augment expert capacity while human experts retain sole authority over risk assessment and policy.